Fast rates with high probability in exp-concave statistical learning

A. Proofs for Stochastic Exp-Concave Optimization
Abstract
This condition is equivalent to stochastic mixability as well as the pseudoprobability convexity (PPC) condition, both defined by Van Erven et al. (2015). To be precise, for stochastic mixability, in Definition 4.1 of Van Erven et al. (2015), take their F_d and F both equal to our F, their P equal to {P}, and ψ(f) = f*; then strong stochastic mixability holds. Likewise, for the PPC condition, in Definition 3.2 of Van Erven et al. (2015) take the same settings but instead φ(f) = f*; then the strong PPC condition holds. Now, Theorem 3.10 of Van Erven et al. (2015) states that the PPC condition implies the (strong) central condition.

Proof (of Theorem 1) First, from Lemma 1, the convexity of F together with η-exp-concavity implies that (P, ℓ, F) satisfies the η-central condition.
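For reference, the two notions invoked above can be stated as follows (a sketch in the notation of Van Erven et al. (2015), not verbatim from the paper): a loss ℓ is η-exp-concave if, for every outcome z, the map f ↦ exp(−η ℓ(f, z)) is concave in f, and (P, ℓ, F) satisfies the η-central condition if some f* ∈ F dominates every f ∈ F in the exponential-moment sense:

```latex
% eta-exp-concavity: for every z, f \mapsto e^{-\eta\,\ell(f,z)} is concave in f.
% eta-central condition: there exists f^* \in \mathcal{F} such that
\mathbb{E}_{Z \sim P}\!\left[ e^{\eta\left(\ell(f^*, Z) - \ell(f, Z)\right)} \right] \le 1
\quad \text{for all } f \in \mathcal{F}.
```

Lemma 1 in the paper shows that convexity of F plus η-exp-concavity of ℓ is enough for this inequality to hold with f* a risk minimizer.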
Similar papers
Open Problem: Fast Stochastic Exp-Concave Optimization
Stochastic exp-concave optimization is an important primitive in machine learning that captures several fundamental problems, including linear regression, logistic regression and more. The exp-concavity property allows for fast convergence rates, as compared to general stochastic optimization. However, current algorithms that attain such rates scale poorly with the dimension n and run in time O...
Fast Rates for Exp-concave Empirical Risk Minimization
We consider Empirical Risk Minimization (ERM) in the context of stochastic optimization with exp-concave and smooth losses—a general optimization framework that captures several important learning problems including linear and logistic regression, learning SVMs with the squared hinge-loss, portfolio selection and more. In this setting, we establish the first evidence that ERM is able to attain ...
Dimension-free Information Concentration via Exp-Concavity
Information concentration of probability measures has important implications in learning theory. Recently, it has been discovered that the information content of a log-concave distribution concentrates around its differential entropy, albeit with an unpleasant dependence on the ambient dimension. In this work, we prove that if the potentials of the log-concave distribution are exp-concave, which i...
A Simple Analysis for Exp-concave Empirical Minimization with Arbitrary Convex Regularizer
In this paper, we present a simple analysis of fast rates with high probability for empirical minimization in stochastic composite optimization over a finite-dimensional bounded convex set with exponentially concave loss functions and an arbitrary convex regularizer. To the best of our knowledge, this result is the first of its kind. As a byproduct, we can directly obtain the fast rate with ...
Fast rates with high probability in exp-concave statistical learning
We present an algorithm for the statistical learning setting with a bounded exp-concave loss in d dimensions that obtains excess risk O(d log(1/δ)/n) with probability 1−δ. The core technique is to boost the confidence of recent in-expectation O(d/n) excess risk bounds for empirical risk minimization (ERM), without sacrificing the rate, by leveraging a Bernstein condition which holds due to exp-c...
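The confidence-boosting idea in this abstract can be illustrated with a minimal sketch (the problem, function names, and split sizes below are illustrative assumptions, not the paper's construction): take squared-loss linear regression as the exp-concave problem, run ERM on several disjoint splits of the sample, and select the candidate with smallest empirical risk on a held-out validation set.

```python
import numpy as np

rng = np.random.default_rng(0)

def erm_squared_loss(X, y):
    # Least squares is ERM for the squared loss, which is exp-concave
    # on a bounded domain.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def boosted_erm(X, y, k=5):
    """Confidence boosting (illustrative): run ERM on k disjoint splits,
    then keep the candidate with smallest empirical risk on a held-out
    validation set. Selection over k candidates is what turns an
    in-expectation guarantee into a high-probability one."""
    n = len(y)
    n_val = n // (k + 1)                       # first chunk is validation
    X_val, y_val = X[:n_val], y[:n_val]
    folds = np.array_split(np.arange(n_val, n), k)
    candidates = [erm_squared_loss(X[idx], y[idx]) for idx in folds]
    risks = [np.mean((X_val @ w - y_val) ** 2) for w in candidates]
    return candidates[int(np.argmin(risks))]

# Toy data: well-specified linear model with small noise.
n, d = 600, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)
w_hat = boosted_erm(X, y)
```

This sketch conveys only the selection step; the paper's actual argument couples it with a Bernstein condition so that the selected hypothesis inherits the O(d/n) rate with probability 1−δ rather than merely in expectation.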